- Europe > Germany > North Rhine-Westphalia > Upper Bavaria > Munich (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
- North America > Canada (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Information Technology > Artificial Intelligence > Vision (0.97)
- Information Technology > Artificial Intelligence > Robots (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
Revealing Bias Formation in Deep Neural Networks Through the Geometric Mechanisms of Human Visual Decoupling
Ma, Yanbiao; Liu, Bowei; Dai, Wei; Chen, Jiayi; Li, Shuo
Deep neural networks (DNNs) often exhibit biases toward certain categories during object recognition, even under balanced training data conditions. The intrinsic mechanisms underlying these biases remain unclear. Inspired by the human visual system, which decouples object manifolds through hierarchical processing to achieve object recognition, we propose a geometric analysis framework linking the geometric complexity of class-specific perceptual manifolds in DNNs to model bias. Our findings reveal that differences in geometric complexity can lead to varying recognition capabilities across categories, introducing biases. To support this analysis, we present the Perceptual-Manifold-Geometry library, designed for calculating the geometric properties of perceptual manifolds.
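The paper's Perceptual-Manifold-Geometry library is not shown here, but one common proxy for the geometric complexity of a class-specific feature cloud is the participation ratio of its covariance eigenvalues. The sketch below (plain NumPy; the function name and the choice of measure are my own illustration, not the paper's) shows how a flat, nearly one-dimensional cloud scores lower than an isotropic one:

```python
import numpy as np

def effective_dimension(features: np.ndarray) -> float:
    """Participation ratio of the covariance eigenvalues: a common
    proxy for the geometric complexity of a point cloud (manifold)."""
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    return float(eigvals.sum() ** 2 / (eigvals ** 2).sum())

# A flat, nearly 1-D cloud has low effective dimension,
# while an isotropic cloud in d dimensions approaches d.
rng = np.random.default_rng(0)
flat = rng.normal(size=(500, 10)) * np.array([5.0] + [0.1] * 9)
isotropic = rng.normal(size=(500, 10))
print(effective_dimension(flat), effective_dimension(isotropic))
```

Under the paper's framing, classes whose manifolds score higher on such complexity measures would be the harder, bias-prone ones.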
- North America > Canada > Ontario > Toronto (0.04)
- Asia > China > Shaanxi Province > Xi'an (0.04)
Probability-density-aware Semi-supervised Learning
Liu, Shuyang; Zheng, Ruiqiu; Shen, Yunhang; Li, Ke; Sun, Xing; Yu, Zhou; Lin, Shaohui
Semi-supervised learning (SSL) assumes that neighboring points lie in the same category (the neighbor assumption) and that points in different clusters belong to different categories (the cluster assumption). Existing methods usually rely on similarity measures to retrieve similar neighboring points while ignoring the cluster assumption, and thus may not exploit unlabeled information sufficiently or effectively. This paper first provides a systematic investigation of the significant role of probability density in SSL and lays a solid theoretical foundation for the cluster assumption. To this end, we introduce a Probability-Density-Aware Measure (PM) to discern the similarity between neighboring points. To further improve label propagation, we also design a Probability-Density-Aware Measure Label Propagation (PMLP) algorithm that fully accounts for the cluster assumption during label propagation. Finally, we prove that traditional pseudo-labeling can be viewed as a particular case of PMLP, which provides a comprehensive theoretical understanding of PMLP's superior performance. Extensive experiments demonstrate that PMLP achieves outstanding performance compared with other recent methods.
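The cluster assumption can be illustrated with a toy density-aware similarity: damp a Gaussian kernel whenever either endpoint lies in a low-density region, so that pairs separated by sparse regions are treated as less similar. This is a hypothetical stand-in, not the paper's actual PM/PMLP formulation; the k-NN density estimate and all function names are my own assumptions:

```python
import numpy as np

def knn_density(X: np.ndarray, k: int = 5) -> np.ndarray:
    """Crude density estimate: inverse of the mean distance to the
    k nearest neighbours (a stand-in for a proper density model)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.sort(d, axis=1)[:, :k]
    return 1.0 / (knn.mean(axis=1) + 1e-12)

def density_aware_similarity(X: np.ndarray, sigma: float = 1.0,
                             k: int = 5) -> np.ndarray:
    """Gaussian similarity damped when either endpoint sits in a
    low-density region, echoing the cluster assumption."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    sim = np.exp(-d ** 2 / (2 * sigma ** 2))
    rho = knn_density(X, k)
    rho = rho / rho.max()
    return sim * np.minimum(rho[:, None], rho[None, :])
```

A label-propagation step would then use this matrix in place of the plain Gaussian affinity, so pseudo-labels spread readily inside dense clusters but weakly across sparse gaps.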
- Asia > China (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Unsupervised or Indirectly Supervised Learning (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Inductive Learning (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
Formation-Controlled Dimensionality Reduction
Dimensionality reduction is the process of extracting low-dimensional structure from high-dimensional data. Examples of high-dimensional data include multimedia databases, gene expression microarrays, and financial time series. To handle such real-world data properly, it is advisable to reduce their dimensionality so as to avoid undesirable high-dimensional effects such as the curse of dimensionality [14, 11]. As a result, tasks such as classification, visualization, and compression of data can be expedited [14]. In many problems, it is presumed that the dimensionality of the measured data is only artificially high: the measured data are high-dimensional but nearly possess a lower-dimensional structure, since they are multiple, indirect measurements of underlying factors that typically cannot be directly calibrated [4].
- Oceania > Australia > Australian Capital Territory > Canberra (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Research Report (0.50)
- Overview (0.46)
Curvature Augmented Manifold Embedding and Learning
Dimension reduction (DR) is a long-standing and active area in the engineering, science, and machine learning communities. It goes by different names depending on the field: in engineering, for example, it is often referred to as reduced-order modeling, and it is closely related to data visualization in machine learning. The core idea is to mitigate the curse of dimensionality by projecting the data features to a low-dimensional space (2D or 3D for data visualization problems, though not necessarily for general DR problems). Once the low-dimensional data structure is obtained, many analyses, such as classification and regression, can be performed far more conveniently than their counterparts in the high-dimensional space. DR methods can be traced back to the most widely used technique, principal component analysis (PCA) [1], a linear DR method based on the eigenvalue problem over all data points. PCA has alternative names in engineering and science, such as proper orthogonal decomposition [2] in structural dynamics and the Karhunen-Loève expansion in engineering statistics [3]. Nonlinear DR methods have been proposed to overcome the apparent limitations of linear DR methods; examples include locally linear embedding (LLE) [4], ISOMAP [5], and the Laplacian Eigenmap [6], among many others. A detailed review of these earlier developments can be found in [7].
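The eigenvalue formulation of PCA mentioned above can be sketched in a few lines of NumPy. This is the textbook construction (center, form the covariance, project onto the top eigenvectors), not code taken from any of the cited works:

```python
import numpy as np

def pca(X: np.ndarray, n_components: int) -> np.ndarray:
    """Linear DR via the eigenvalue problem of the data covariance,
    as in classical PCA."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order]                   # project onto top PCs

# Data that is essentially 2-D embedded in 5-D: PCA recovers
# nearly all of the variance with two components.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(200, 5))
Z = pca(X, 2)
print(Z.shape)  # (200, 2)
```

The nonlinear methods named above (LLE, ISOMAP, Laplacian Eigenmap) likewise reduce to eigenvalue problems, but on locally constructed graph matrices rather than the global covariance.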
- North America > United States > Arizona > Maricopa County > Tempe (0.04)
- Asia > Japan > Honshū > Tōhoku (0.04)
- Research Report (1.00)
- Overview (0.67)
Visualizing Data using GTSNE
High-dimensional data visualization is a very important problem for humans to make sense of data. Currently, the state-of-the-art methods are t-SNE (van der Maaten and Hinton (2008); van der Maaten (2013)) and UMAP (McInnes and Healy (2018)), which share a similar principle for nonlinear dimensionality reduction. They use neighborhood probability distributions to connect high-dimensional data points to low-dimensional map points, attempting to keep the local relative neighborhood relations unchanged while ignoring changes in the macro structure of the data. However, this may cause the low-dimensional map points to represent the high-dimensional structure unfaithfully. In the low-dimensional neighborhood-keeping and patching process, t-SNE sometimes breaks neighborhood relations of the high-dimensional structure in the low-dimensional space. We add a macro loss term to the t-SNE loss so that the relative k-means centroid structure is preserved between the high- and low-dimensional spaces, which essentially keeps the macro structure unchanged in the low-dimensional map.
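A macro term of this kind can be illustrated with a toy version that compares max-normalized centroid distance matrices between the two spaces: if the low-dimensional map preserves the relative centroid geometry, the term vanishes. This is a hypothetical sketch of the idea, not GTSNE's actual objective; cluster labels are assumed given (e.g., from k-means run on the high-dimensional data):

```python
import numpy as np

def macro_loss(X_high: np.ndarray, Y_low: np.ndarray,
               labels: np.ndarray) -> float:
    """Toy 'macro' term: penalize changes in the relative geometry of
    per-cluster centroids between the high- and low-dimensional spaces.
    (A hypothetical stand-in for GTSNE's loss, not the paper's code.)"""
    def centroid_dists(Z):
        cents = np.stack([Z[labels == c].mean(axis=0)
                          for c in np.unique(labels)])
        d = np.linalg.norm(cents[:, None] - cents[None, :], axis=-1)
        return d / (d.max() + 1e-12)          # scale-free comparison
    return float(((centroid_dists(X_high) - centroid_dists(Y_low)) ** 2).sum())
```

In a full method this term would be added, with a weight, to the usual KL-divergence objective of t-SNE and minimized jointly over the map points.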